Which Neural Net Architectures Give Rise To Exploding and Vanishing Gradients?

Author

  • Boris Hanin
Abstract

We give a rigorous analysis of the statistical behavior of gradients in randomly initialized feed-forward networks with ReLU activations. Our results show that a fully connected depth-$d$ ReLU net with hidden layer widths $n_j$ will have exploding and vanishing gradients if and only if $\sum_{j=1}^{d} 1/n_j$ is large. The point of view of this article is that whether a given neural net will have exploding/vanishing gradients is a function mainly of the architecture of the net, and hence can be tested at initialization. Our results imply that a fully connected network that produces manageable gradients at initialization must have many hidden layers that are about as wide as the network is deep. This work is related to the mean field theory approach to random neural nets. From this point of view, we give a rigorous computation of the $1/n_j$ corrections to the propagation of gradients at the so-called edge of chaos.
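
As a rough illustration of the criterion in the abstract, the NumPy sketch below is a minimal toy experiment (not code from the paper; the function names, toy architectures, and He-style initialization are assumptions for illustration only). It computes $\sum_{j=1}^{d} 1/n_j$ for two depth-20 ReLU nets and compares the empirical spread of their input-output gradient norms at random initialization. For constant width $n$ the sum equals $d/n$, so keeping it small forces the hidden layers to be roughly as wide as the network is deep.

```python
# Hypothetical sketch, not the paper's code: probing the sum-of-reciprocal-widths
# criterion empirically with NumPy. Architectures and names are illustrative only.
import numpy as np

def reciprocal_width_sum(widths):
    """Compute sum_j 1/n_j over the hidden-layer widths n_j."""
    return sum(1.0 / n for n in widths)

def relu_net_gradient_norm(widths, n_in=10, seed=0):
    """Squared norm of d(output)/d(input) for one randomly initialized
    fully connected ReLU net (He-style initialization, variance 2/fan_in)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n_in)
    jac = np.eye(n_in)                       # running Jacobian w.r.t. the input x
    h = x
    fan_in = n_in
    for n in widths:
        W = rng.normal(scale=np.sqrt(2.0 / fan_in), size=(n, fan_in))
        z = W @ h
        mask = (z > 0).astype(float)         # ReLU derivative
        jac = (mask[:, None] * W) @ jac      # chain rule through this layer
        h = np.maximum(z, 0.0)
        fan_in = n
    # final linear read-out to a scalar output
    w_out = rng.normal(scale=np.sqrt(1.0 / fan_in), size=fan_in)
    grad = w_out @ jac                       # gradient of the scalar output w.r.t. x
    return float(grad @ grad)

# Two depth-20 architectures with very different values of sum_j 1/n_j:
# gradients should fluctuate far more wildly for the narrow net.
narrow = [5] * 20     # sum 1/n_j = 4.0  -> exploding/vanishing expected
wide   = [200] * 20   # sum 1/n_j = 0.1  -> well-behaved gradients expected
for name, widths in [("narrow", narrow), ("wide", wide)]:
    norms = [relu_net_gradient_norm(widths, seed=s) for s in range(200)]
    print(f"{name:>6}: sum 1/n_j = {reciprocal_width_sum(widths):.2f}, "
          f"gradient-norm mean = {np.mean(norms):.3f}, std = {np.std(norms):.3f}")
```

Under the paper's result, the narrow net's gradient norms should scatter over several orders of magnitude while the wide net's concentrate around their mean; the two nets have the same depth, so the difference is driven by $\sum_j 1/n_j$, not by depth alone.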

Similar articles

Stable Architectures for Deep Neural Networks

Deep neural networks have become valuable tools for supervised machine learning, e.g., in the classification of text or images. While offering superior results over traditional techniques to find and express complicated patterns in data, deep architectures are known to be challenging to design and train such that they generalize well to new data. An important issue that must be overcome is nume...

Gradients explode - Deep Networks are shallow - ResNet explained

Whereas it is believed that techniques such as Adam, batch normalization and, more recently, SeLU nonlinearities “solve” the exploding gradient problem, we show that this is not the case in general and that in a range of popular MLP architectures, exploding gradients exist and that they limit the depth to which networks can be effectively trained, both in theory and in practice. We explain why ...

On the difficulty of training recurrent neural networks

There are two widely known issues with properly training recurrent neural networks, the vanishing and the exploding gradient problems detailed in Bengio et al. (1994). In this paper we attempt to improve the understanding of the underlying issues by exploring these problems from an analytical, a geometric and a dynamical systems perspective. Our analysis is used to justify a simple yet effectiv...

The Shattered Gradients Problem: If resnets are the answer, then what is the question?

A long-standing obstacle to progress in deep learning is the problem of vanishing and exploding gradients. Although the problem has largely been overcome via carefully constructed initializations and batch normalization, architectures incorporating skip-connections such as highway and resnets perform much better than standard feedforward architectures despite well-chosen initialization and batc...


Journal:
  • CoRR

Volume: abs/1801.03744  Issue: –

Pages: –

Publication year: 2018